1.
Sensors (Basel); 21(10), 2021 May 11.
Article in English | MEDLINE | ID: mdl-34064805

ABSTRACT

Over the last few years, mobile robotics has experienced great development thanks to the wide variety of problems that can be solved with this technology. An autonomous mobile robot must be able to operate in a priori unknown environments, planning its trajectory and navigating to the required target points. To this end, it is crucial to solve the mapping and localization problems accurately and at an acceptable computational cost. Omnidirectional vision systems have emerged as a robust choice thanks to the large amount of information they can extract from the environment. The images must be processed to obtain the relevant information that permits solving the mapping and localization problems robustly. The classical frameworks to address these problems are based on the extraction, description and tracking of local features or landmarks. More recently, however, a new family of methods has emerged as a robust alternative in mobile robotics. It consists of describing each image as a whole, which leads to conceptually simpler algorithms. While methods based on local features have been extensively studied and compared in the literature, those based on global appearance still merit a thorough study to characterize their performance. In this work, a comparative evaluation of six global-appearance description techniques in localization tasks is carried out, in terms of both accuracy and computational cost. To this end, sets of images captured in a real environment are used, including typical phenomena such as changes in lighting conditions, visual aliasing, partial occlusions and noise.
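To illustrate the global-appearance pipeline the abstract describes, the following is a minimal sketch in Python: each panoramic image is summarized by a single descriptor, and localization reduces to a nearest-neighbour search against the map. HOG (via scikit-image) is used here only as one representative descriptor; the paper evaluates six techniques, and the resize, cell and orientation parameters below are assumptions.

```python
# Minimal sketch of global-appearance localization (assumed parameters).
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def describe(image, size=(64, 256)):
    """Describe a grayscale panoramic image as a whole with one HOG vector."""
    return hog(resize(image, size), orientations=8,
               pixels_per_cell=(16, 16), cells_per_block=(1, 1))

def localize(query_image, map_descriptors, map_positions):
    """Return the map position whose descriptor is closest to the query's."""
    d = describe(query_image)
    dists = np.linalg.norm(map_descriptors - d, axis=1)
    return map_positions[int(np.argmin(dists))]
```

Because each image collapses to one vector, accuracy and computational cost both hinge on the descriptor choice, which is exactly the trade-off the comparative evaluation measures.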

2.
Sensors (Basel); 20(3), 2020 Feb 08.
Article in English | MEDLINE | ID: mdl-32046293

ABSTRACT

Visual sensors have characteristics that make them interesting as sources of information for any process or system [...].

3.
Sensors (Basel); 18(7), 2018 Jun 26.
Article in English | MEDLINE | ID: mdl-29949916

ABSTRACT

This work presents a visual information fusion approach for robust probability-oriented feature matching. It relies on omnidirectional imaging and is tested in a visual localization framework in mobile robotics. General visual localization methods have been extensively studied and optimized in terms of performance. However, one of the main threats to the final estimation is the presence of outliers. In this paper, we present several contributions to deal with that issue. First, 3D information data, associated with SURF (Speeded-Up Robust Features) points detected in the images, is inferred under the Bayesian framework established by Gaussian processes (GPs). This information represents a probability distribution for the existence of the feature points, which is successively fused and updated throughout the robot's poses. Secondly, this distribution can be properly sampled and projected onto the next 2D image frame, at time t+1, by means of a filter-motion prediction. This strategy permits obtaining relevant areas in the image reference system, from which probable matches can be detected in terms of the accumulated probability of feature existence. This approach entails an adaptive probability-oriented matching search, which concentrates on significant areas of the image but also considers unseen parts of the scene, thanks to an internal modulation of the probability distribution domain, computed in terms of the current uncertainty of the system. The main outcomes confirm robust feature matching, which permits producing consistent localization estimates, aided by the odometer's prior to estimate the scale factor. Publicly available datasets have been used to validate the design and operation of the approach. Moreover, the proposal has been compared, first with standard feature matching and second with a localization method based on an inverse depth parametrization. The results confirm the validity of the approach in terms of feature matching, localization accuracy, and time consumption.
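A loose sketch of the gating idea follows: a Gaussian process regresses a feature-existence probability over projected 2D image locations, and candidate matches are only searched where the predicted probability is high. This illustrates the concept with scikit-learn, not the authors' exact Bayesian formulation; the RBF kernel, length scale and threshold are assumptions.

```python
# Probability-oriented matching gate (illustrative, assumed kernel/threshold).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def existence_map(train_xy, train_p, grid_xy):
    """Regress feature-existence probability over 2D image coordinates."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=30.0))
    gp.fit(train_xy, train_p)
    return np.clip(gp.predict(grid_xy), 0.0, 1.0)

def gated_candidates(keypoints_xy, grid_xy, prob, threshold=0.5):
    """Keep only keypoints that fall in high-probability cells of the map."""
    keep = []
    for kp in keypoints_xy:
        i = int(np.argmin(np.linalg.norm(grid_xy - kp, axis=1)))  # nearest cell
        if prob[i] >= threshold:
            keep.append(kp)
    return np.array(keep)
```

Restricting the search domain in this way is what cuts down outlier matches: descriptors are only compared inside regions where the fused distribution says a feature is likely to exist.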

4.
PLoS One; 12(5): e0175938, 2017.
Article in English | MEDLINE | ID: mdl-28464032

ABSTRACT

Over the past years, mobile robots have proliferated both in domestic and in industrial environments to solve tasks such as cleaning, assistance, or material transportation. One of their advantages is the ability to operate in wide areas without the need to introduce changes into the existing infrastructure. Thanks to the sensors they may be equipped with and their processing systems, mobile robots constitute a versatile alternative for a wide range of applications. When designing the control system of a mobile robot so that it carries out a task autonomously in an unknown environment, the robot is expected to make decisions about its localization in the environment and about the trajectory it has to follow to reach the target points. More concretely, the robot has to find a reasonably good solution to two crucial problems: building a model of the environment, and estimating its position within this model. In this work, we propose a framework to solve these problems using only visual information. The mobile robot is equipped with a catadioptric vision sensor that provides omnidirectional images of the environment. First, the robot travels along the trajectories to be included in the model and uses the visual information captured to build this model. After that, the robot is able to estimate its position and orientation with respect to the trajectory. Among the possible approaches to these problems, global-appearance techniques are used in this work. They have recently emerged as a robust and efficient alternative to landmark-extraction techniques. A global description method based on the Radon transform is used to design the mapping and localization algorithms, and a set of images captured by a mobile robot in a real environment, under realistic operating conditions, is used to test their performance.
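As a minimal sketch of how a Radon-based global descriptor can drive localization along a stored trajectory: each image is summarized by its Radon transform and the query is matched to the nearest stored image. The scikit-image `radon` function, the angle count and the Euclidean distance are illustrative assumptions, not the paper's exact configuration.

```python
# Radon-transform global descriptor with nearest-neighbour localization
# (assumed angles and distance measure).
import numpy as np
from skimage.transform import radon

def radon_descriptor(image, n_angles=90):
    """Describe the whole grayscale image by its normalized Radon transform."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image, theta=theta)
    return sinogram / np.linalg.norm(sinogram)

def nearest_pose(query, map_descriptors, map_poses):
    """Match the query against descriptors precomputed along the trajectory."""
    q = radon_descriptor(query)
    dists = [np.linalg.norm(q - m) for m in map_descriptors]
    return map_poses[int(np.argmin(dists))]
```

In practice the trajectory model would store the precomputed descriptors rather than raw images, so that localization reduces to cheap vector comparisons.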

5.
Sensors (Basel); 15(10): 26368-95, 2015 Oct 16.
Article in English | MEDLINE | ID: mdl-26501289

ABSTRACT

This work presents methods to create local maps and to estimate the position of a mobile robot using the global appearance of omnidirectional images. The robot carries an omnidirectional vision system, and every omnidirectional image it acquires is described with a single global-appearance descriptor based on the Radon transform. Two different scenarios are considered in this paper. In the first, we assume the existence of a previously built map composed of omnidirectional images captured from known positions. The goal in this case is to estimate the map position nearest to the current position of the robot, making use of the visual information acquired from its current (unknown) position. In the second, we assume a model of the environment composed of omnidirectional images, but with no information about the locations where the images were acquired. The goal in this case is to build a local map and estimate the position of the robot within it. Both methods are tested with different databases (including virtual and real images), taking into consideration changes in the position of objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and robustness of both methods.
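Beyond position, a Radon signature of an omnidirectional image can also suggest relative orientation: a rotation of the robot shows up approximately as a circular shift along the angular axis of the signature, so the shift that best aligns two signatures estimates their relative heading. The sketch below illustrates this property only; the angular resolution and alignment criterion are assumptions.

```python
# Relative orientation from circular shifts of Radon signatures
# (illustrative; assumes omnidirectional images of equal size).
import numpy as np
from skimage.transform import radon

def radon_signature(image, n_angles=180):
    """Radon transform sampled over the full circle of projection angles."""
    theta = np.linspace(0.0, 360.0, n_angles, endpoint=False)
    return radon(image, theta=theta)

def relative_orientation(sig_a, sig_b):
    """Find the column shift of sig_b that best matches sig_a, in degrees."""
    n = sig_a.shape[1]
    errors = [np.linalg.norm(sig_a - np.roll(sig_b, s, axis=1))
              for s in range(n)]
    return int(np.argmin(errors)) * (360.0 / n)
```

The residual error at the best shift can additionally serve as the image-distance measure used for nearest-neighbour position estimation.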

6.
Sensors (Basel); 14(2): 3033-64, 2014 Feb 14.
Article in English | MEDLINE | ID: mdl-24534922

ABSTRACT

Map building and localization are two crucial abilities that autonomous robots must develop. Vision sensors have become a widespread option to solve these problems. When using this kind of sensor, the robot must extract from the scenes the information necessary to build a representation of the environment where it has to move, and to estimate its position and orientation robustly. Techniques based on the global appearance of the scenes constitute one possible approach to extract this information. They consist of representing each scene with a single descriptor that gathers global information from the scene. These techniques present some advantages compared to classical descriptors based on the extraction of local features. However, a good configuration of the parameters is important to reach a compromise between computational cost and accuracy. In this paper we make an exhaustive comparison among several global-appearance descriptors to solve the mapping and localization problem. With this aim, we make use of several image sets captured in indoor environments under realistic working conditions. The datasets have been collected using an omnidirectional vision sensor mounted on the robot.
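A small harness of the kind such a comparison implies is sketched below: each candidate descriptor is scored by its nearest-neighbour localization error and by the time it takes to describe the map. The descriptor functions themselves are placeholders to be supplied; the error metric and timing scope are assumptions.

```python
# Benchmark harness for global-appearance descriptors (illustrative).
import time
import numpy as np

def evaluate(descriptor_fn, map_images, map_positions,
             test_images, test_positions):
    """Return (mean localization error, total description+query time)."""
    t0 = time.perf_counter()
    map_desc = np.array([descriptor_fn(im) for im in map_images])
    errors = []
    for im, true_pos in zip(test_images, test_positions):
        d = descriptor_fn(im)
        nearest = int(np.argmin(np.linalg.norm(map_desc - d, axis=1)))
        errors.append(np.linalg.norm(map_positions[nearest] - true_pos))
    return float(np.mean(errors)), time.perf_counter() - t0
```

Sweeping each descriptor's parameters through this harness exposes the accuracy-versus-cost compromise the abstract refers to.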

7.
Sensors (Basel); 10(5): 5209-32, 2010.
Article in English | MEDLINE | ID: mdl-22399930

ABSTRACT

In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot, equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot carries a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
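A schematic sketch of one Rao-Blackwellized particle filter update, in the FastSLAM style, is given below: each particle carries a hypothesis of the robots' poses and, conditioned on it, per-landmark Gaussian estimates maintained by small EKFs. The motion model, measurement likelihood and EKF update are placeholders, and the resampling scheme is simplified; this is not the authors' exact multi-robot formulation.

```python
# One RBPF (FastSLAM-style) update over a team of robots (schematic).
import numpy as np

def rbpf_step(particles, controls, measurements,
              motion_model, landmark_ekf_update, likelihood):
    weights = []
    for p in particles:
        # 1. Sample a new pose for every robot from the motion model.
        p["poses"] = [motion_model(pose, u)
                      for pose, u in zip(p["poses"], controls)]
        # 2. Conditioned on this pose hypothesis, update each observed
        #    landmark's Gaussian with a small EKF (None = new landmark).
        w = 1.0
        for robot, z in measurements:  # z carries a landmark id + bearing/range
            lm = p["landmarks"].get(z.id)
            p["landmarks"][z.id] = landmark_ekf_update(lm, p["poses"][robot], z)
            w *= likelihood(p["landmarks"][z.id], p["poses"][robot], z)
        weights.append(w)
    # 3. Resample particles in proportion to their weights.
    weights = np.array(weights) / np.sum(weights)
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return [dict(poses=list(particles[i]["poses"]),
                 landmarks=dict(particles[i]["landmarks"])) for i in idx]
```

The Rao-Blackwellization is what keeps this tractable: landmark estimates are independent given a sampled path, so each particle maintains many tiny EKFs instead of one joint filter.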

8.
Sensors (Basel); 10(12): 11468-97, 2010.
Article in English | MEDLINE | ID: mdl-22163538

ABSTRACT

In this paper we deal with the problem of map building and localization of a mobile robot in an environment, using the information provided by an omnidirectional vision sensor mounted on the robot. Our main objective is to study the feasibility of techniques based on the global appearance of a set of omnidirectional images captured by this vision sensor to solve this problem. First, we study how to describe the visual information globally so that it correctly represents locations and the geometric relationships between them. Then, we integrate this information, using an approach based on a spring-mass-damper model, to create a topological map of the environment. Once the map is built, we propose the use of a Monte Carlo localization approach to estimate the most probable pose of the vision system and its trajectory within the map. We compare the alternatives in terms of computational cost and localization error. The experimental results presented have been obtained with real indoor omnidirectional images.
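One way to read the spring-mass-damper idea is as an iterative layout relaxation: each image is a node, neighbouring images are joined by springs whose rest length encodes their estimated separation, and node positions settle under damped spring forces. The following is a minimal sketch under that interpretation; the stiffness, damping and step constants are assumptions.

```python
# Damped spring relaxation of a topological map layout (assumed constants).
import numpy as np

def relax(positions, edges, rest_lengths,
          k=0.5, damping=0.8, dt=0.1, iters=500):
    """positions: (n, 2) node coordinates; edges: list of (i, j) pairs."""
    vel = np.zeros_like(positions)
    for _ in range(iters):
        force = np.zeros_like(positions)
        for (i, j), L in zip(edges, rest_lengths):
            d = positions[j] - positions[i]
            dist = np.linalg.norm(d) + 1e-9
            f = k * (dist - L) * d / dist   # Hooke's law along the edge
            force[i] += f
            force[j] -= f
        vel = damping * (vel + dt * force)  # damped velocity integration
        positions = positions + dt * vel
    return positions
```

Once the layout has converged, Monte Carlo localization can weight pose particles by the similarity between the current image's global descriptor and those stored at nearby map nodes.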


Subjects
Architecture, Geographic Information Systems, Image Interpretation, Computer-Assisted/methods, Maps as Topic, Monte Carlo Method, Architecture/instrumentation, Architecture/methods, Artificial Intelligence, Computer Simulation, Geographic Information Systems/instrumentation, Humans, Image Interpretation, Computer-Assisted/instrumentation, Models, Biological, Models, Theoretical, Pattern Recognition, Automated/methods, Pattern Recognition, Automated/statistics & numerical data, Robotics/methods